117 research outputs found

    Big Data in Psychology: Introduction to the Special Issue

    The introduction to this special issue on psychological research involving big data summarizes the highlights of 10 articles that address a number of important and inspiring perspectives, issues, and applications. Four common themes emerge across the articles with respect to psychological research conducted with big data: 1. The benefits of collaboration across disciplines, such as the social sciences, applied statistics, and computer science, which grounds big data research in sound theory and practice and affords effective data retrieval and analysis. 2. The availability of large datasets from Facebook, Twitter, and other social media sites, which provide a psychological window into the attitudes and behaviors of a broad spectrum of the population. 3. The need to identify, address, and remain sensitive to ethical considerations when analyzing large datasets obtained from public or private sources. 4. The unavoidable necessity of validating predictive models in big data by applying a model developed on one dataset to a separate dataset or hold-out sample. Translational abstracts that summarize the articles in clear and understandable terms are included in Appendix A, and a glossary of terms relevant to big data research discussed in the articles is presented in Appendix B.
    Keywords: big data, machine learning, statistical learning theory, social media data, digital footprint, decision trees and forests
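
    To make the fourth theme concrete, here is a minimal sketch of hold-out validation in Python. It assumes scikit-learn is available; the feature matrix and outcome are synthetic stand-ins, not data from any of the articles.

        # Hold-out validation: fit a model on one split of the data and
        # judge it only on a separate hold-out sample.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))                          # synthetic features
        y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # synthetic outcome

        X_train, X_hold, y_train, y_hold = train_test_split(
            X, y, test_size=0.3, random_state=0)

        model = LogisticRegression().fit(X_train, y_train)
        # The hold-out score is the honest estimate; training accuracy
        # typically overstates how well the model will generalize.
        print("train accuracy:   ", accuracy_score(y_train, model.predict(X_train)))
        print("hold-out accuracy:", accuracy_score(y_hold, model.predict(X_hold)))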

    Interpreting Multiple Linear Regression: A Guidebook of Variable Importance

    Multiple regression (MR) analyses are commonly employed in the social sciences, yet interpretation of their results typically reflects an overreliance on beta weights (cf. Courville & Thompson, 2001; Nimon, Roberts, & Gavrilova, 2010; Zientek, Capraro, & Capraro, 2008), often resulting in very limited interpretations of variable importance. Few researchers appear to employ other methods to obtain a fuller understanding of what independent variables contribute to a regression equation and how. This paper therefore presents a guidebook of variable importance measures that inform MR results, linking the measures to a theoretical framework that demonstrates the complementary roles they play when interpreting regression findings. We also provide a data-driven example of publishing MR results that demonstrates how to present a more complete picture of the contributions variables make to a regression equation. We end with several recommendations for practice regarding how to integrate multiple variable importance measures into MR analyses.
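
    As a hedged illustration of the guidebook's point, the sketch below contrasts beta weights with structure coefficients (the correlation of each predictor with the predicted criterion), one complementary importance measure. The three predictors and their intercorrelations are hypothetical; only NumPy is assumed.

        # Beta weights vs. structure coefficients on simulated data with
        # correlated predictors, where the two measures can disagree.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        x1 = rng.normal(size=n)
        x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)     # correlated with x1
        x3 = rng.normal(size=n)
        y = 0.5 * x1 + 0.4 * x2 + 0.2 * x3 + rng.normal(size=n)

        X = np.column_stack([x1, x2, x3])
        Z = (X - X.mean(0)) / X.std(0)               # standardize predictors
        zy = (y - y.mean()) / y.std()

        beta, *_ = np.linalg.lstsq(Z, zy, rcond=None)    # beta weights
        yhat = Z @ beta
        # Structure coefficient: r between each predictor and predicted y.
        structure = [np.corrcoef(Z[:, j], yhat)[0, 1] for j in range(3)]

        for j, (b, s) in enumerate(zip(beta, structure), start=1):
            print(f"x{j}: beta = {b:+.3f}, structure r = {s:+.3f}")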

    Estimating Operational Validity Under Incidental Range Restriction: Some Important but Neglected Issues

    Operational validities are important to personnel selection research because they estimate how well a predictor in practical use would correlate with a criterion construct if the criterion measure were purged of measurement error variance. Because range restriction on a predictor or predictor composite creates incidental range restriction on the criterion, existing methodologies offer limited information and guidance for estimating operational validities. Although the effects of range restriction and criterion unreliability can be corrected with existing equations applied sequentially, proper use of sequential correction equations is not always as straightforward as it appears. This research reviews the existing equations for correcting validities, outlines the appropriate method for correcting validity coefficients via sequential equations, and proposes a new equation that performs a combined correction for the effects of incidental range restriction and criterion unreliability.
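
    For orientation, here is a minimal sketch of the two classic building blocks such sequential corrections combine, assuming direct restriction on the predictor (Thorndike Case II) and a criterion reliability estimated in the restricted sample; the proper ordering of the steps, and the cases where the sequential approach misleads, are precisely what the article examines. The numeric inputs are hypothetical.

        import math

        def correct_range_restriction(r, u):
            # Thorndike Case II: r observed in the restricted group,
            # u = SD(unrestricted) / SD(restricted) on the predictor.
            return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

        def correct_criterion_unreliability(r, ryy):
            # Disattenuate for criterion measurement error (reliability ryy).
            return r / math.sqrt(ryy)

        r_obs, u, ryy = 0.30, 1.5, 0.80      # hypothetical inputs
        step1 = correct_criterion_unreliability(r_obs, ryy)   # unreliability first...
        step2 = correct_range_restriction(step1, u)           # ...then restriction
        print(f"observed r = {r_obs:.2f}, estimated operational validity = {step2:.2f}")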

    Scientific, Legal, and Ethical Concerns About AI-Based Personnel Selection Tools: A Call to Action

    Organizations are increasingly turning toward personnel selection tools that rely on artificial intelligence (AI) technologies and machine learning algorithms and that, together, are intended to predict the future success of employees better than traditional tools. These new forms of assessment include online games, video-based interviews, and big data pulled from many sources, including test responses, test-taking behavior, applications, resumes, and social media. Speedy processing, lower costs, convenient access, and applicant engagement are often, and rightfully, cited as practical advantages of these selection tools. At the same time, however, these tools raise serious concerns about their effectiveness: their conceptual relevance to the job, their basis in a job analysis to ensure job relevancy, their measurement characteristics (reliability and stability), their validity in predicting employee-relevant outcomes, whether their evidence and normative information are updated appropriately, and the associated ethical concerns around what information is represented to employers and told to job candidates. This paper explores these concerns and concludes with an urgent call for industrial and organizational psychologists to extend existing professional standards for employment testing to these new AI- and machine-learning-based forms of testing, including standards and requirements for their documentation.
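
    Two of the measurement concerns named above lend themselves to simple checks. The sketch below, on wholly synthetic data, computes score stability across repeated administrations (test-retest) and criterion-related validity against later performance; it assumes NumPy and illustrates the checks generically, not any particular vendor's tool.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 300
        trait = rng.normal(size=n)                        # latent trait
        score_t1 = trait + 0.5 * rng.normal(size=n)       # AI score, time 1
        score_t2 = trait + 0.5 * rng.normal(size=n)       # AI score, retest
        perf = 0.4 * trait + rng.normal(size=n)           # later job performance

        stability = np.corrcoef(score_t1, score_t2)[0, 1]   # test-retest r
        validity = np.corrcoef(score_t1, perf)[0, 1]        # criterion validity
        print(f"stability r = {stability:.2f}, predictive validity r = {validity:.2f}")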

    “Where’s the I-O?” Artificial Intelligence and Machine Learning in Talent Management Systems

    Artificial intelligence (AI) and machine learning (ML) have seen widespread adoption by organizations seeking to identify and hire high-quality job applicants. Yet the volume, variety, and velocity of professional involvement among I-O psychologists remain relatively limited when it comes to developing and evaluating AI/ML applications for talent assessment and selection. Furthermore, there is a paucity of empirical research investigating the reliability, validity, and fairness of AI/ML tools in organizational contexts. To stimulate future involvement and research, we share our review and perspective on the current state of AI/ML in talent assessment, along with its benefits and potential pitfalls. In addressing the issue of fairness, we present experimental evidence regarding the potential for AI/ML to evoke adverse reactions from job applicants during selection procedures. We close by emphasizing increased collaboration among I-O psychologists, computer scientists, legal scholars, and members of other professional disciplines in developing, implementing, and evaluating AI/ML applications in organizational contexts.
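
    Fairness has several facets; the experiment above concerns applicant reactions, but a conventional first screen I-O psychologists apply to any selection procedure, AI/ML-based or not, is the four-fifths (80%) rule from the Uniform Guidelines. A minimal sketch with hypothetical applicant counts:

        def selection_rate(selected, applicants):
            return selected / applicants

        rate_majority = selection_rate(60, 100)      # hypothetical majority group
        rate_protected = selection_rate(36, 100)     # hypothetical protected group

        impact_ratio = rate_protected / rate_majority
        print(f"impact ratio = {impact_ratio:.2f}")
        if impact_ratio < 0.80:
            # Below four-fifths of the highest group's rate: potential
            # adverse impact, warranting closer statistical review.
            print("flag: potential adverse impact")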

    Cloud-based Meta-analysis to Bridge Science and Practice: Welcome to metaBUS

    Although volumes have been written on spanning the science-practice gap in applied psychology, surprisingly few tangible components of that bridge have actually been constructed. We describe the metaBUS platform, which addresses three challenges posed by one contributor to that gap: information overload. In particular, we describe challenges stemming from (1) lack of access to research findings, (2) lack of an organizing map of the topics studied, and (3) lack of interpretation guidelines for research findings. For each challenge, we show how metaBUS, an advanced search and synthesis engine currently covering more than 780,000 findings from 9,000 studies, can provide the building blocks needed to move beyond the engineering-design phase and toward construction, generating rapid, first-pass meta-analyses on virtually any topic to inform both research and practice. We provide an Internet link to a preliminary version of the metaBUS interface and two brief demonstrations illustrating its functionality.
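
    To suggest what a rapid, first-pass synthesis can look like, here is a bare-bones (Hunter & Schmidt style) meta-analysis sketch: a sample-size-weighted mean correlation plus the variance remaining after subtracting expected sampling error. The (r, n) study pairs are invented, not metaBUS output.

        # Bare-bones meta-analysis over hypothetical (correlation, N) pairs.
        studies = [(0.21, 150), (0.35, 80), (0.28, 220), (0.15, 60)]

        k = len(studies)
        N = sum(n for _, n in studies)
        rbar = sum(r * n for r, n in studies) / N                   # weighted mean r
        var_obs = sum(n * (r - rbar) ** 2 for r, n in studies) / N  # observed var
        var_err = (1 - rbar**2) ** 2 * k / N                        # sampling error
        var_rho = max(var_obs - var_err, 0.0)                       # residual var

        print(f"k = {k}, N = {N}, mean r = {rbar:.3f}, residual SD = {var_rho**0.5:.3f}")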

    Applying Task Force Recommendations on Integrating Science and Practice in Health Service Psychology Education

    The proper role of research skills, and of training to conduct research, in professional psychology education has been controversial throughout the history of the field. An extensive effort was undertaken recently to address that issue and identify ways the field might move forward in a more unified manner. In 2015, the American Psychological Association (APA) Board of Educational Affairs convened a task force to address one of the recommendations made by the Health Service Psychology Education Collaborative in 2013: that the education and training of health service psychologists (HSPs) include an integrative approach to science and practice, one that incorporates scientific-mindedness and training in research skills and goes well beyond merely "consuming" research findings. The task force subsequently developed recommendations concerning the centrality of science competencies for HSPs and how these competencies extend beyond training in evidence-based practice. This article discusses the findings of the task force and the implications of its recommendations for the education and training of HSPs. The challenges and opportunities associated with implementing these recommendations in HSP graduate programs are examined.